16 research outputs found

    Improving Collaboration Between Drivers and Automated Vehicles with Trust Processing Methods

    Trust has gained attention in the Human-Robot Interaction (HRI) field, as it is considered an antecedent of people's reliance on machines. In general, people are likely to rely on and use machines they trust, and to refrain from using machines they do not trust. Recent advances in robotic perception technologies open paths for the development of machines that can be aware of people's trust by observing their behaviors. This dissertation explores the role of trust in the interactions between humans and robots, particularly Automated Vehicles (AVs). Novel methods and models are proposed for perceiving and processing drivers' trust in AVs and for determining both humans' natural trust and robots' artificial trust. Two high-level problems are addressed in this dissertation: (1) the problem of avoiding or reducing miscalibrations of drivers' trust in AVs, and (2) the problem of how trust can be used to dynamically allocate tasks between a human and a robot that collaborate. A complete solution is proposed for the problem of avoiding or reducing trust miscalibrations. This solution combines methods for estimating and influencing drivers' trust through interactions with the AV. Three main contributions stem from that solution: (i) the characterization of risk factors that affect drivers' trust in AVs, which provided theoretical evidence for the development of a linear model of driver trust in AVs; (ii) the development of a new method for real-time trust estimation, which leveraged that linear trust model to implement a Kalman-filter-based approach able to provide numerical estimates from the processing of drivers' behavioral measurements; and (iii) the development of a new method for trust calibration, which identifies trust miscalibration instances from comparisons between drivers' trust in the AV and that AV's capabilities, and triggers messages from the AV to the driver. As shown by the obtained results, these messages are effective for encouraging or warning drivers who are undertrusting or overtrusting the AV's capabilities, respectively. Although the development of a trust-based solution for dynamically allocating tasks between a human and a robot (i.e., the second high-level problem addressed in this dissertation) remains an open problem, we take a step forward in that direction. The fourth contribution of this dissertation is the development of a unified bi-directional model for predicting natural and artificial trust. This trust model is based on mathematical representations of both the trustee agent's capabilities and the capabilities required for the execution of a task. Trust emerges from comparisons between the agent's capabilities and the task's requirements, roughly following this logic: if a trustee agent's capabilities exceed the requirements for executing a certain task, then the agent can be highly trusted (to execute that task); conversely, if that trustee agent's capabilities fall short of the task's requirements, trust should be low. In this trust model, the agent's capabilities are represented by random variables that are dynamically updated over interactions between the trustor and the trustee, whenever the trustee succeeds or fails in the execution of a task. These capability representations allow for the numerical computation of a human's trust or a robot's trust, which is represented by the probability that a given trustee agent will execute a given task successfully.
    PhD, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/169615/1/azevedo_1.pd
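
    As a rough illustration of contribution (i), a linear trust model of this kind can be written in state-space form. The sketch below is a hypothetical Python formulation; the state definition, the event inputs, the behavioral outputs, and all numerical values are assumptions for illustration, not the dissertation's actual model.

# Hypothetical linear trust-dynamics sketch (not the dissertation's exact model):
# trust evolves linearly with interaction events and is observed only indirectly
# through drivers' behavioral measurements.
import numpy as np

A = np.array([[0.95]])                 # trust persistence between interactions (assumed)
B = np.array([[0.05, -0.10, -0.20]])   # effect of: correct alarm, false alarm, miss (assumed)
C = np.array([[1.0], [0.8]])           # maps trust to observable behaviors,
                                       # e.g. AV usage time and gaze ratio (assumed)

def step(trust, events):
    """Propagate the trust state given an event vector [correct, false_alarm, miss]."""
    return A @ trust + B @ events

trust = np.array([0.5])                           # initial trust level
trust = step(trust, np.array([1.0, 0.0, 0.0]))    # one interaction with a correct alarm
print(trust)       # slightly increased trust
print(C @ trust)   # predicted behavioral measurements for this trust level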

    Context-Adaptive Management of Drivers’ Trust in Automated Vehicles

    Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers' trust from drivers' behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers' trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) trust of undertrusting (overtrusting) drivers, and reducing the average trust miscalibration time periods by approximately 40%. The framework is applicable to the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver–AV teams.
    U.S. Army CCDC/GVSC; Automotive Research Center; National Science Foundation
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/162571/1/Azevedo-Sa et al. 2020 with doi.pdf
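
    The detection-and-reaction loop described above can be sketched compactly; the thresholds, the capability score, and the message styles in this example are illustrative assumptions rather than the letter's actual parameters.

# Illustrative sketch of trust-miscalibration handling; thresholds and labels
# are assumptions, not the published framework's actual values.
def manage_trust(estimated_trust: float, av_capability: float, margin: float = 0.15) -> str:
    """Compare estimated driver trust with the AV's capability for the current
    context and choose a communication style."""
    if estimated_trust < av_capability - margin:
        return "encourage"   # undertrust: highlight what the AV handles reliably
    if estimated_trust > av_capability + margin:
        return "warn"        # overtrust: warn about the AV's current limitations
    return "neutral"         # roughly calibrated: no intervention needed

print(manage_trust(0.4, 0.8))   # -> "encourage"
print(manage_trust(0.9, 0.6))   # -> "warn"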

    Real-Time Estimation of Drivers' Trust in Automated Driving Systems

    Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman-filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task (NDRT). We conducted a study (n = 80) with a simulated SAE Level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
    National Science Foundation; Brazilian Army's Department of Science and Technology; Automotive Research Center (ARC) at the University of Michigan; U.S. Army CCDC/GVSC (government contract DoD-DoA W56HZV14-2-0001)
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/162572/1/Azevedo Sa et al. 2020.pdf
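
    A compressed sketch of the Kalman-filter idea follows; the dynamics, the measurement model (usage time and an eye-tracking ratio), and the noise covariances are illustrative assumptions, not the parameters identified in the study.

# Scalar-state Kalman filter sketch for trust estimation from behavioral
# measurements (all numerical values are illustrative assumptions).
import numpy as np

A, B = 0.95, 0.05             # trust dynamics: persistence and input gain (assumed)
C = np.array([1.0, 0.8])      # measurement model: usage time, gaze ratio (assumed)
Q = 1e-3                      # process noise variance
R = np.diag([0.05, 0.05])     # measurement noise covariance

def kalman_step(x, P, u, y):
    # Predict trust forward one interaction with input u (e.g., an alarm event).
    x_pred = A * x + B * u
    P_pred = A * P * A + Q
    # Update with the sensed behaviors y = [usage_time, gaze_ratio].
    S = np.outer(C, C) * P_pred + R
    K = P_pred * C @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C * x_pred)
    P_new = (1.0 - K @ C) * P_pred
    return x_new, P_new

x, P = 0.5, 0.1                                            # initial trust estimate and variance
x, P = kalman_step(x, P, u=1.0, y=np.array([0.7, 0.65]))   # one interaction
print(round(float(x), 3))                                  # updated trust estimate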

    Using Trust for Heterogeneous Human-Robot Team Task Allocation

    Human-robot teams can perform better across various tasks than human-only and robot-only teams. However, such improvements cannot be realized without proper task allocation. Trust is an important factor in teaming relationships and can be used in the task allocation strategy. Despite its importance, most existing task allocation strategies do not incorporate trust. This paper reviews select studies on trust and task allocation. We also summarize and discuss how a bi-directional trust model can be used for a task allocation strategy. The bi-directional trust model represents task requirements and agents by their capabilities, and can be used to predict trust for both existing and new tasks. Our task allocation approach uses predicted trust in the agent and expected total reward for task assignment. Finally, we present some directions for future work, including the incorporation of trust from the human and human capacity into task allocation, and a negotiation phase for resolving task disagreements.
    Army Research Lab under grant #F061352; National Science Foundation
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/170403/1/Ali et al. 2021 AAAI 2020 Post.pdf
    Description of Ali et al. 2021 AAAI 2020 Post.pdf : Article
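
    A small sketch of the allocation idea summarized above: assign each task to the agent with the best trust-weighted expected reward. The scoring rule and the toy numbers are illustrative simplifications, not the paper's exact strategy.

# Illustrative trust-based task allocation: each task goes to the agent with the
# highest predicted-trust x expected-reward score (a simplification).
def allocate(tasks, agents, predict_trust, expected_reward):
    """predict_trust(agent, task) -> probability of successful execution;
    expected_reward(agent, task) -> reward obtained if the task succeeds."""
    return {
        task: max(agents, key=lambda a: predict_trust(a, task) * expected_reward(a, task))
        for task in tasks
    }

# Toy example with hard-coded predictions (purely illustrative values):
trust = {("human", "inspect"): 0.9, ("robot", "inspect"): 0.6,
         ("human", "lift"): 0.5, ("robot", "lift"): 0.95}
reward = {("human", "inspect"): 10, ("robot", "inspect"): 10,
          ("human", "lift"): 8, ("robot", "lift"): 8}
print(allocate(["inspect", "lift"], ["human", "robot"],
               lambda a, t: trust[(a, t)], lambda a, t: reward[(a, t)]))
# -> {'inspect': 'human', 'lift': 'robot'}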

    A Unified Bi-directional Model for Natural and Artificial Trust in Human–Robot Collaboration

    We introduce a novel capabilities-based bidirectional multi-task trust model that can be used for trust prediction from either a human or a robotic trustor agent. Tasks are represented in terms of their capability requirements, while trustee agents are characterized by their individual capabilities. Trustee agents' capabilities are not deterministic; they are represented by belief distributions. For each task to be executed, a higher level of trust is assigned to trustee agents who have demonstrated that their capabilities exceed the task's requirements. We report results of an online experiment with 284 participants, revealing that our model outperforms existing models for multi-task trust prediction from a human trustor. We also present simulations of the model for determining trust from a robotic trustor. Our model is useful for control authority allocation applications that involve human–robot teams.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/167859/1/Azevedo-Sa et al. 2021.pdf
    Description of Azevedo-Sa et al. 2021.pdf : Preprint
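
    A minimal sketch of the capability-belief idea follows; the Beta parameterization and the unit success/failure update are assumptions made for illustration, and the published model's exact distributions and update rules may differ.

# Minimal sketch: a capability belief as a Beta distribution; trust in an agent
# for a task is the probability that its capability exceeds the task's requirement.
from scipy.stats import beta

class CapabilityBelief:
    def __init__(self, successes=1.0, failures=1.0):
        self.a = successes   # pseudo-counts of observed successes
        self.b = failures    # pseudo-counts of observed failures

    def observe(self, success: bool):
        # Shift the belief after each interaction outcome.
        if success:
            self.a += 1.0
        else:
            self.b += 1.0

    def trust(self, requirement: float) -> float:
        """Trust = P(capability >= requirement), with requirement in [0, 1]."""
        return 1.0 - beta.cdf(requirement, self.a, self.b)

belief = CapabilityBelief()
for outcome in [True, True, True, False, True]:   # observed task outcomes
    belief.observe(outcome)
print(round(belief.trust(0.6), 3))   # trust for a task requiring capability >= 0.6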

    Using Trust in Automation to Enhance Driver-(Semi)Autonomous Vehicle Interaction and Improve Team Performance

    Trust in robots has been gathering attention from multiple directions, as it has a special relevance in the theoretical descriptions of human-robot interactions. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall "rapport" between humans and robots. Unfortunately, miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It happens when a user's trust levels are not appropriate to the capabilities of the automation being used. Users can be under-trusting the automation, when a lack of trust keeps them from using functionalities that the machine can perform correctly, or over-trusting the automation, when an excess of trust leads them to use the machine in situations where its capabilities are not adequate. The main objective of this work is to examine drivers' trust development in the automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers' trust in the ADS. The driving context facilitates the instrumentation to measure trusting behaviors, such as drivers' eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers' trusting behaviors and a consequent estimation of trust levels is possible. We expect that these techniques will permit the design of ADSs able to adapt their behaviors to adjust drivers' trust levels. This capability could prevent under- and over-trusting, which could harm drivers' safety or performance.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/167861/1/ISTDM-2021-Extended-Abstract-0118.pdf
    Description of ISTDM-2021-Extended-Abstract-0118.pdf : Paper

    Handling Trust Between Drivers and Automated Vehicles for Improved Collaboration

    Advances in perception and artificial intelligence technology are expected to lead to seamless interaction between humans and robots. Trust in robots has been evolving from the theory on trust in automation, with a fundamental difference: unlike traditional automation, robots can adjust their behaviors depending on how their human counterparts appear to be trusting them, or on how trustworthy the humans appear to be. In this extended abstract I present my research on methods for processing trust in the particular context of interactions between a driver and an automated vehicle, with the goal of achieving higher safety and performance standards for the team formed by those human and robotic agents.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/164968/1/Azevedo-Sa et al. 2021.pdf

    Error Type, Risk, Performance, and Trust: Investigating the Different Impacts of False Alarms and Misses on Trust and Performance

    Semi-autonomous vehicles are intended to give drivers multitasking flexibility and to improve driving safety. Yet, drivers have to trust the vehicle's autonomy to fully leverage the vehicle's capability. Prior research on drivers' trust in a vehicle's autonomy has typically assumed that the autonomy was without error. Unfortunately, this is at times an unrealistic assumption. To address this shortcoming, we seek to examine the impacts of automation errors on the relationship between drivers' trust in automation and their performance on a non-driving secondary task. More specifically, we plan to investigate false alarms and misses in both low- and high-risk conditions. To accomplish this, we plan to use a 2 (risk conditions) × 4 (alarm conditions) mixed design. The findings of this study are intended to inform Autonomous Driving System (ADS) designers by helping them appropriately tune the sensitivity of alert systems based on an understanding of the impacts of error type and varying risk conditions.
    This research is supported in part by the Automotive Research Center (ARC) at the University of Michigan, with funding from government contract DoD-DoA W56HZV-14-2-0001, through the U.S. Army Combat Capabilities Development Command (CCDC)/Ground Vehicle Systems Center (GVSC).
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/149648/1/GVSETS 2019_FinalPaper.pdf
    Description of GVSETS 2019_FinalPaper.pdf : Main File

    Comparing the Effects of False Alarms and Misses on Humans’ Trust in (Semi)Autonomous Vehicles

    Trust in automated driving systems is crucial for effective driver-(semi)autonomous vehicle interaction. Drivers who do not trust the system appropriately are not able to leverage its benefits. This study presents a mixed-design user experiment in which participants conducted a non-driving task while traveling in a simulated semi-autonomous vehicle with forward collision alarm and emergency braking functions. Occasionally, the system missed obstacles or provided false alarms. We varied these system error types as well as road shapes, and measured the effects of these variations on trust development. Results reveal that misses are more harmful to trust development than false alarms, and that these effects are strengthened by operation on risky roads. Our findings provide additional insight into the development of trust in automated driving systems, and are useful for the design of such technologies.
    Automotive Research Center at the University of Michigan, through the U.S. Army CCDC/GVSC
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/153524/1/Azevedo-Sa et al. 2020.pdf
    Description of Azevedo-Sa et al. 2020.pdf : Main File

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Meeting abstract